4,095 research outputs found

    Face and Object Recognition and Detection Using Colour Vector Quantisation

    In this paper we present an approach to face and object detection and recognition based on an extension of the content-based image retrieval method of Lu and Teng (1999). The method applies vector quantisation (VQ) compression to the image stream and uses the Mahalanobis-weighted Euclidean distance between VQ histograms as the measure of image similarity. This distance measure retains both colour and spatial feature information, but has the useful property of being relatively insensitive to changes in scale and rotation. The method is applied to real images for face recognition and face detection applications. Tracking and object detection can be coded relatively efficiently due to the data reduction afforded by VQ compression of the data stream. Additional computational efficiency is obtained through a variation of the tree-structured fast VQ algorithm, also presented here.
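
    As a concrete illustration of the similarity measure described above, the following sketch computes a Mahalanobis-weighted Euclidean distance between VQ code histograms. It is a minimal reconstruction, not the authors' implementation; the codebook, feature vectors and identity weighting matrix are all stand-ins.

        # Hedged sketch: images are VQ-compressed against a shared codebook and
        # compared via the Mahalanobis-weighted Euclidean distance between the
        # resulting codeword histograms.
        import numpy as np

        def vq_histogram(features, codebook):
            """Assign each feature vector to its nearest codeword and return
            the normalised histogram of codeword usage."""
            dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
            codes = dists.argmin(axis=1)
            hist = np.bincount(codes, minlength=len(codebook)).astype(float)
            return hist / hist.sum()

        def weighted_distance(h1, h2, inv_cov):
            """Mahalanobis-weighted Euclidean distance between two histograms."""
            diff = h1 - h2
            return float(np.sqrt(diff @ inv_cov @ diff))

        # Toy usage with random data standing in for real image features.
        rng = np.random.default_rng(0)
        codebook = rng.random((16, 3))
        h_a = vq_histogram(rng.random((500, 3)), codebook)
        h_b = vq_histogram(rng.random((500, 3)), codebook)
        inv_cov = np.eye(16)  # in practice estimated from a training corpus
        print(weighted_distance(h_a, h_b, inv_cov))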

    Homogenised Virtual Support Vector Machines

    In many domains, reliable a priori knowledge exists that may be used to improve classifier performance. For example, in handwritten digit recognition such a priori knowledge may include classification invariance with respect to image translations and rotations. In this paper, we present a new generalisation of the Support Vector Machine (SVM) that aims to better incorporate this knowledge. The method is an extension of the Virtual SVM, and penalises an approximation of the variance of the decision function across each grouped set of "virtual examples", thus exploiting the fact that these groups should ideally be assigned similar class membership probabilities. The method is shown to be an efficient approximation of the invariant SVM of Chapelle and Schölkopf, with the advantage that it can be solved by a trivial modification to standard SVM optimisation packages, with a negligible increase in computational complexity compared with the Virtual SVM. The efficacy of the method is demonstrated on a simple problem.
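
    The group-variance penalty can be imitated with a simple gradient scheme. The sketch below is a loose analogue under our own assumptions (a linear model, virtual examples generated by small horizontal translations, hinge loss on the group mean); it is not the paper's optimisation procedure, which works through standard SVM packages.

        # Loose sketch: penalise the variance of a linear decision function
        # across groups of "virtual examples" built from each training point.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(20, 2))
        y = np.sign(X[:, 0] + 0.1 * rng.normal(size=20))

        def virtual_group(x, shifts=(-0.05, 0.0, 0.05)):
            """A group of virtual examples: the point plus small translations."""
            return np.array([x + np.array([s, 0.0]) for s in shifts])

        w, b, lam, mu, lr = np.zeros(2), 0.0, 1e-2, 1.0, 0.1
        for _ in range(200):
            grad_w, grad_b = lam * w, 0.0
            for xi, yi in zip(X, y):
                G = virtual_group(xi)          # (m, 2) virtual examples
                f = G @ w + b                  # decision values on the group
                if yi * f.mean() < 1:          # hinge loss on the group mean
                    grad_w += -yi * G.mean(axis=0)
                    grad_b += -yi
                centred = f - f.mean()         # within-group variance penalty
                grad_w += mu * (2.0 / len(G)) * (centred @ G)
            w -= lr * grad_w / len(X)
            b -= lr * grad_b / len(X)
        print(w, b)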

    Support Vector Machines for Business Applications

    This chapter discusses the use of Support Vector Machines (SVMs) for business applications. It provides a brief historical background on inductive learning and pattern recognition, followed by an intuitive motivation for SVM methods. The method is compared to other approaches, and the tools and background theory required to successfully apply SVMs to business applications are introduced. The authors hope that the chapter will help practitioners to understand when the SVM should be the method of choice, as well as how to achieve good results in minimal time.

    Kernel Based Algebraic Curve Fitting

    An algebraic curve is defined as the zero set of a multivariate polynomial. We consider the problem of fitting an algebraic curve to a set of vectors, given an additional set of vectors labelled as interior or exterior to the curve. The problem of fitting a linear curve in this way is shown to lend itself to a support vector representation, allowing non-linear curves and high-dimensional surfaces to be estimated using kernel functions. The approach is attractive due to the stability of the solutions obtained, the range of functional forms made possible (including polynomials), and the potential for applying well-understood regularisation operators from the theory of Support Vector Machines.
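
    The construction can be imitated with a standard SVM package: label sample points as interior (+1) or exterior (-1), fit an SVM with a polynomial kernel, and read the algebraic curve off as the zero set of the decision function. The sketch below uses scikit-learn and a synthetic unit circle; it illustrates the idea rather than the paper's exact formulation.

        # Hedged sketch: the zero level set of a polynomial-kernel SVM decision
        # function serves as the fitted algebraic curve.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.uniform(-2, 2, size=(400, 2))
        y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0, 1, -1)  # interior vs exterior

        clf = SVC(kernel="poly", degree=2, coef0=1.0).fit(X, y)

        # Decision values: roughly zero on the curve, positive inside,
        # negative outside.
        probe = np.array([[1.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
        print(clf.decision_function(probe))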

    Implicit surfaces with globally regularised and compactly supported basis functions

    We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. Our contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously associated only with fully supported bases, together with a proof of equivalence to a Gaussian process with a modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as on 4D time series data.
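
    A minimal version of the implicit-surface construction, using Wendland's compactly supported basis without the paper's global regularisation or multi-scale machinery, is sketched below. Surface normals are handled here by the classical off-surface-point trick (constraints at points offset along the normals), which is simpler than the direct normal treatment the paper develops.

        # Hedged 2D sketch: fit f with f = 0 on the samples and f = eps at
        # points offset along the normals, using a compactly supported RBF.
        import numpy as np

        def wendland(r, support=1.5):
            """Wendland C2 compactly supported radial basis function."""
            s = np.clip(1.0 - r / support, 0.0, None)
            return s ** 4 * (4.0 * r / support + 1.0)

        t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
        pts = np.c_[np.cos(t), np.sin(t)]   # toy "surface": the unit circle
        normals = pts.copy()                # outward normals
        eps = 0.1

        centres = np.vstack([pts, pts + eps * normals])
        targets = np.r_[np.zeros(len(pts)), np.full(len(pts), eps)]

        r = np.linalg.norm(centres[:, None] - centres[None, :], axis=2)
        coeffs = np.linalg.solve(wendland(r) + 1e-8 * np.eye(len(centres)), targets)

        def f(x):
            return wendland(np.linalg.norm(centres - x, axis=1)) @ coeffs

        print(f(np.array([1.0, 0.0])))  # approximately zero on the surface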

    Algebraic Curve Fitting Support Vector Machines

    An algebraic curve is defined as the zero set of a multivariate polynomial. We consider the problem of fitting an algebraic curve to a set of vectors, given an additional set of vectors labelled as interior or exterior to the curve. The problem of fitting a linear curve in this way is shown to lend itself to a support vector representation, allowing non-linear curves and high-dimensional surfaces to be estimated using kernel functions. The approach is attractive due to the stability of the solutions obtained, the range of functional forms made possible (including polynomials), and the potential for applying well-understood regularisation operators from the theory of Support Vector Machines.

    Improved Classification Using Hidden Markov Averaging From Multiple Observation Sequences

    The enormous popularity of Hidden Markov Models (HMMs) in spatio-temporal pattern recognition is largely due to the ability to 'learn' model parameters from observation sequences through the Baum-Welch and other re-estimation procedures. In this study, HMM parameters are estimated from an ensemble of models trained on individual observation sequences. The proposed methods are shown to provide superior classification performance to competing methods.
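
    In its simplest form, the ensemble idea amounts to training one HMM per observation sequence and combining the resulting parameter sets. The sketch below averages and renormalises the parameters of already-trained models; it glosses over state alignment between models, a practical issue any real averaging scheme has to address, and should be read as an illustration rather than the paper's estimator.

        # Hedged sketch: average HMM parameters across per-sequence models.
        import numpy as np

        def average_hmms(models):
            """models: list of (startprob, transmat, emissionprob) triples.
            Returns the element-wise average, renormalised so rows remain
            valid probability distributions. Assumes hidden states are in a
            consistent order across models."""
            start = np.mean([m[0] for m in models], axis=0)
            trans = np.mean([m[1] for m in models], axis=0)
            emis = np.mean([m[2] for m in models], axis=0)
            return (start / start.sum(),
                    trans / trans.sum(axis=1, keepdims=True),
                    emis / emis.sum(axis=1, keepdims=True))

        rng = np.random.default_rng(3)
        def random_hmm(n=2, k=3):  # stand-ins for Baum-Welch-trained models
            return (rng.dirichlet(np.ones(n)),
                    rng.dirichlet(np.ones(n), size=n),
                    rng.dirichlet(np.ones(k), size=n))

        print(average_hmms([random_hmm() for _ in range(5)]))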

    Efficient non-parametric Bayesian Hawkes processes

    In this paper, we develop an efficient non-parametric Bayesian estimation of the kernel function of Hawkes processes. The non-parametric Bayesian approach is important because it provides flexible Hawkes kernels and quantifies their uncertainty. Our method is based on the cluster representation of Hawkes processes. Utilizing the stationarity of the Hawkes process, we efficiently sample random branching structures and thus split the Hawkes process into clusters of Poisson processes. We derive two algorithms, a block Gibbs sampler and a maximum a posteriori estimator based on expectation maximization, and we show that our methods have linear time complexity, both theoretically and empirically. On synthetic data, we show that our methods can infer flexible Hawkes triggering kernels. On two large-scale Twitter diffusion datasets, we show that our methods outperform the current state-of-the-art in goodness-of-fit and that the time complexity is linear in the size of the dataset. We also observe that on diffusions related to online videos, the learned kernels reflect the perceived longevity of different content types, such as music or pet videos.
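
    The branching-structure sampling at the heart of the cluster representation can be sketched compactly: each event is attributed either to the background rate or to one earlier event, with probabilities proportional to the background intensity and the triggering kernel at the relevant time lag. The exponential kernel below is a stand-in; the paper estimates the kernel non-parametrically.

        # Hedged sketch of one branching-structure sampling pass.
        import numpy as np

        rng = np.random.default_rng(4)
        times = np.sort(rng.uniform(0, 10, size=30))  # toy event times
        mu = 0.5                                      # background intensity
        phi = lambda dt: 0.8 * np.exp(-dt)            # stand-in triggering kernel

        def sample_branching(times, mu, phi):
            """parent[i] = -1 if event i is a background event, otherwise the
            index of the earlier event that triggered it."""
            parents = []
            for i, t in enumerate(times):
                weights = np.r_[mu, phi(t - times[:i])]
                weights /= weights.sum()
                choice = rng.choice(len(weights), p=weights)
                parents.append(choice - 1)  # -1 encodes the background process
            return np.array(parents)

        print(sample_branching(times, mu, phi))
        # Conditioned on the sampled structure, background events and each
        # event's offspring form independent Poisson processes, which is what
        # makes the Gibbs updates cheap.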

    Towards a Maximum Entropy Method for Estimating HMM Parameters

    Training a Hidden Markov Model (HMM) to maximise the probability of a given sequence can result in over-fitting: the model represents the training sequence well, but fails to generalise. In this paper, we present a possible solution to this problem, which is to maximise a linear combination of the likelihood of the training data and the entropy of the model. We derive the necessary equations for gradient-based maximisation of this combined term. The performance of the system is then evaluated against three other algorithms on a classification task using synthetic data. The results indicate that the method is potentially useful. The main problem with the method is the computational intractability of the entropy calculation.
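
    Written out in our own notation (the paper's exact formulation is not reproduced here), the combined objective has the form

        J(\theta) = \log P(O \mid \theta) + \lambda\, H(\theta), \qquad
        H(\theta) = -\sum_{Q} P(Q \mid \theta) \log P(Q \mid \theta),

    where O is the training sequence, \lambda trades likelihood against entropy, and the sum in H(\theta) runs over exponentially many sequences Q, which is the source of the intractability noted above.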

    Variational Inference for Sparse Gaussian Process Modulated Hawkes Process

    The Hawkes process (HP) has been widely applied to modeling self-exciting events, including neuron spikes, earthquakes and tweets. To avoid designing a parametric triggering kernel, and to be able to quantify prediction confidence, the non-parametric Bayesian HP has been proposed. However, inference in such models suffers from poor scalability or slow convergence. In this paper, we aim to solve both problems. First, we propose a new non-parametric Bayesian HP in which the triggering kernel is modeled as a squared sparse Gaussian process. Then, we propose a novel variational inference scheme for model optimization. We employ the branching structure of the HP so that maximization of the evidence lower bound (ELBO) is tractable by the expectation-maximization algorithm. We propose a tighter ELBO, which improves the fitting performance. Further, we accelerate the novel variational inference scheme to linear time complexity by leveraging the stationarity of the triggering kernel; unlike prior acceleration methods, ours enjoys higher efficiency. Finally, we use synthetic data and two large social media datasets to evaluate our method. We show that our approach outperforms state-of-the-art non-parametric frequentist and Bayesian methods. We validate the efficiency of our accelerated variational inference scheme and the practical utility of our tighter ELBO for model selection, observing that the tighter ELBO outperforms the standard one.
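
    The central modelling choice, a triggering kernel obtained by squaring a sparse Gaussian process, is easy to sketch: squaring guarantees a non-negative intensity, and the sparse approximation works through a small set of inducing points. Everything below (RBF covariance, inducing locations, the predictive-mean shortcut) is illustrative rather than the paper's construction.

        # Hedged sketch: triggering kernel as the square of a sparse-GP function.
        import numpy as np

        rng = np.random.default_rng(5)

        def rbf(a, b, ell=0.5):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

        Z = np.linspace(0, 4, 8)                   # inducing point locations
        Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
        u = np.linalg.cholesky(Kzz) @ rng.normal(size=len(Z))  # sample f at Z

        def f(t):
            """Predictive mean of the GP at times t, conditioned on u."""
            return rbf(np.atleast_1d(t), Z) @ np.linalg.solve(Kzz, u)

        def triggering_kernel(t):
            return f(t) ** 2  # squaring keeps the Hawkes intensity non-negative

        print(triggering_kernel(np.linspace(0, 4, 5)))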